We study the improving multi-armed bandits (IMAB) problem, in which the reward obtained from an arm increases with the number of pulls it receives. This model provides an elegant abstraction for many real-world problems in domains such as education and employment, where decisions about the distribution of opportunities can affect the future capabilities of communities and the disparity between them. A decision-maker in such settings must consider the impact of her decisions on future rewards in addition to the standard objective of maximizing her cumulative reward at any time. In many of these applications, the time horizon is unknown to the decision-maker, which motivates the study of the IMAB problem in the technically more challenging horizon-unaware setting. We study the tension that arises between two seemingly conflicting objectives in the horizon-unaware setting: a) maximizing the cumulative reward at any time based on the current rewards of the arms, and b) ensuring that arms with better long-term rewards get sufficient opportunities even if their initial rewards are low. We show that, surprisingly, the two objectives are aligned with each other in this setting. Our main contribution is an anytime algorithm for the IMAB problem that achieves the best possible cumulative reward while ensuring that arms reach their true potential given sufficient time. Our algorithm mitigates the initial disparity due to lack of opportunity and continues pulling an arm until it stops improving. We prove the optimality of our algorithm by showing that a) any algorithm for the IMAB problem, no matter how utilitarian, must suffer $\Omega(T)$ policy regret and an $\Omega(k)$ competitive ratio with respect to the optimal offline policy, and b) the competitive ratio of our algorithm is $O(k)$.
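As a toy illustration of the setting above, the sketch below simulates arms whose mean rewards rise with the pulls they receive and applies the "keep pulling until the arm stops improving" idea from the abstract. The reward curves, noise level, tolerance, and the final commit-to-best phase are assumptions made for illustration; this is not the paper's actual anytime algorithm or analysis.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy IMAB instance (parameters are illustrative, not from the paper):
    # arm a's mean reward rises with the number of pulls it has received
    # and saturates at a long-term ceiling.
    caps  = np.array([0.4, 0.7, 0.9])   # long-term reward ceilings
    rates = np.array([0.9, 0.3, 0.1])   # how fast each arm improves
    K = len(caps)

    def pull(arm, n_pulls):
        mean = caps[arm] * (1.0 - np.exp(-rates[arm] * n_pulls))
        return mean + rng.normal(0.0, 0.01)

    # Visit each arm in turn, pulling while its observed reward still rises
    # by more than a tolerance, so initially weak arms still get enough
    # opportunity to reach their true potential.
    T, tol = 400, 1e-3
    pulls = np.zeros(K, dtype=int)
    history = []
    budget = T
    for arm in range(K):
        last = -np.inf
        while budget > 0:
            r = pull(arm, pulls[arm])
            pulls[arm] += 1
            budget -= 1
            history.append(r)
            if r - last <= tol:        # apparently plateaued: move on
                break
            last = r

    best = int(np.argmax([pull(a, pulls[a]) for a in range(K)]))
    while budget > 0:                   # spend the rest on the best matured arm
        history.append(pull(best, pulls[best]))
        pulls[best] += 1
        budget -= 1

    print("pulls per arm:", pulls, " cumulative reward: %.1f" % sum(history))

Note how the slow-improving arm with the highest ceiling (the third one) ends up receiving most of the budget, which is the alignment between the two objectives that the abstract describes.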
Humans have come to rely on machines to reduce excessive information to manageable representations. But this reliance can be abused: strategic machines might craft representations that manipulate their users. How can a user make good choices based on strategic representations? We formalize this as a learning problem and pursue algorithms for decision-making that are robust to manipulation. In our main setting of interest, the system represents the attributes of an item to the user, who then decides whether or not to consume. We model this interaction through the lens of strategic classification (Hardt et al. 2016), reversed: the user, who learns, plays first; the system, which responds, plays second. The system must respond with representations that reveal 'nothing but the truth' but need not reveal the whole truth. Thus, the user faces the problem of learning set functions under strategic subset selection, which presents distinct algorithmic and statistical challenges. Our main result is a learning algorithm that minimizes error despite strategic representations, and our theoretical analysis sheds light on the trade-off between learning effort and susceptibility to manipulation.
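The toy simulation below illustrates strategic subset selection under assumed numbers: an item has truthful attributes, the system reveals only the k most favorable ones, a naive user takes the revealed subset at face value, and a crude "strategy-aware" correction discounts what is hidden. It is a sketch of the phenomenon, not the paper's algorithm or its robustness guarantee.

    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed toy model: an item has n truthful attributes in [0, 1]; its
    # value to the user is their mean; the user consumes iff she believes
    # the value >= 0.5. The system reveals only k attributes -- "nothing
    # but the truth", not the whole truth -- and shows the k largest.
    n, k, thresh = 8, 3, 0.5

    def trial(user_estimate):
        base = rng.uniform(0, 1)
        attrs = np.clip(base + rng.normal(0, 0.25, n), 0, 1)
        revealed = np.sort(attrs)[-k:]       # strategic subset selection
        consume = user_estimate(revealed) >= thresh
        value = attrs.mean()
        # payoff: value gained above the threshold if consumed, else zero
        return (value - thresh) if consume else 0.0

    naive = lambda rev: rev.mean()           # takes the subset at face value
    # A crude robust correction (an assumption for illustration): treat the
    # n-k hidden attributes as no better than the smallest revealed one.
    aware = lambda rev: (rev.sum() + (n - k) * rev.min()) / n

    for name, est in [("naive", naive), ("strategy-aware", aware)]:
        payoff = np.mean([trial(est) for _ in range(20000)])
        print(f"{name:>14s} user: average payoff {payoff:+.3f}")

In this toy, the strategy-aware user consumes less often but earns a higher average payoff than the naive user, mirroring the manipulation-robustness trade-off the abstract studies.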
More than 5 million children under the age of five die every year from medical conditions that are largely preventable or treatable, with the overwhelming majority of these deaths occurring in under-developed countries with low vaccination uptake. One of the United Nations' Sustainable Development Goals (SDG 3) aims to end preventable deaths of newborns and children under five. We focus on Nigeria, where the infant mortality rate is appalling. In collaboration with HelpMum, a large non-profit organization in Nigeria, we design and optimize the allocation of heterogeneous health interventions under uncertainty to increase vaccination uptake, the first such collaboration in Nigeria. Our framework, ADVISER: AI-Driven Vaccination Intervention Optimiser, is based on an integer linear program that seeks to maximize the cumulative probability of successful vaccination. Our optimization formulation is intractable in practice, so we propose a heuristic approach that enables us to solve the problem for real-world use cases, and we present theoretical bounds for the heuristic. Finally, we show through experimental evaluation that the proposed approach outperforms baseline methods in terms of vaccination uptake. HelpMum is currently planning a deployment based on our approach in the largest Nigerian city, which would be the first deployment of an AI-driven vaccination uptake program in the country, and which we hope paves the way for other data-driven programs to improve health outcomes in Nigeria.
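To make the shape of such a formulation concrete, here is a minimal sketch of an integer linear program of the kind described, written with PuLP: assign at most one intervention per beneficiary to maximize the sum of estimated success probabilities under a budget. The interventions, probabilities, costs, and budget are hypothetical, and ADVISER's actual formulation (which, as noted above, required a heuristic at real-world scale) is not reproduced here.

    import pulp

    # p[i][j]: assumed probability that beneficiary i is successfully
    # vaccinated if given intervention j (illustrative numbers only)
    p = {
        0: {"phone_reminder": 0.30, "travel_voucher": 0.70},
        1: {"phone_reminder": 0.50, "travel_voucher": 0.60},
        2: {"phone_reminder": 0.20, "travel_voucher": 0.80},
    }
    cost = {"phone_reminder": 1.0, "travel_voucher": 5.0}
    budget = 7.0

    prob = pulp.LpProblem("toy_vaccination_ilp", pulp.LpMaximize)
    x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
         for i in p for j in p[i]}

    # objective: cumulative probability of successful vaccination
    prob += pulp.lpSum(p[i][j] * x[i, j] for (i, j) in x)
    for i in p:  # at most one intervention per beneficiary
        prob += pulp.lpSum(x[i, j] for j in p[i]) <= 1
    prob += pulp.lpSum(cost[j] * x[i, j] for (i, j) in x) <= budget

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for (i, j), var in x.items():
        if var.value() == 1:
            print(f"beneficiary {i} <- {j}")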
人类有自然能够毫不费力地理解语言指挥,如“黄色轿车旁边的公园”,本能地知道车辆的道路的哪个地区应该导航。扩大这种对自主车辆的能力是创建根据人类命令响应和行动的完全自治代理的下一步。为此,我们提出了通过语言命令引用可导航区域(RNR),即导航的接地区域的新任务。 RNR与引用图像分割(RIS)不同,该图像分割(RIS)侧重于自然语言表达式而不是接地导航区域的对象接地。例如,对于指令“黄色轿车旁边的公园,”RIS将旨在分割推荐的轿车,而RNR旨在将建议的停车位分段在道路上分割。我们介绍了一个新的DataSet,talk2car-regseg,它将现有的talk2car数据集扩展,其中包含语言命令描述的区域的分段掩码。提供了一个单独的测试拆分,具有简明的机动指导命令,以评估我们数据集的实用性。我们使用新颖的变换器的架构基准测试所提出的数据集。我们呈现广泛的消融,并在多个评估指标上显示出卓越的性能。基于RNR输出产生轨迹的下游路径规划器确认了所提出的框架的功效。
Deep learning techniques with neural networks have been used effectively in computational fluid dynamics (CFD) to obtain solutions to nonlinear differential equations. This paper presents a physics-informed neural network (PINN) approach to solve the Blasius function. This method eliminates the need to transform the nonlinear differential equation into an initial value problem, and it sidesteps the convergence issue that arises in the conventional series solution. The method produces results on par with those of numerical and conventional methods. The solution is extended to the negative axis to show that PINNs capture the singularity of the function at $\eta = -5.69$.
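A minimal PINN sketch for the Blasius boundary-value problem $f''' + \frac{1}{2} f f'' = 0$ with $f(0) = 0$, $f'(0) = 0$, $f'(\eta \to \infty) = 1$ is given below in PyTorch. The network size, optimizer settings, and the location at which the far-field condition is truncated are illustrative choices, not necessarily those used in the paper.

    import torch

    torch.manual_seed(0)

    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )

    def derivatives(eta):
        # f and its first three derivatives w.r.t. eta via autograd
        f = net(eta)
        f1 = torch.autograd.grad(f, eta, torch.ones_like(f), create_graph=True)[0]
        f2 = torch.autograd.grad(f1, eta, torch.ones_like(f1), create_graph=True)[0]
        f3 = torch.autograd.grad(f2, eta, torch.ones_like(f2), create_graph=True)[0]
        return f, f1, f2, f3

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    eta_max = 10.0  # assumed truncation of the far-field condition
    for step in range(5000):
        opt.zero_grad()
        # collocation points for the PDE residual f''' + 0.5 * f * f''
        eta = (torch.rand(256, 1) * eta_max).requires_grad_(True)
        f, f1, f2, f3 = derivatives(eta)
        residual = f3 + 0.5 * f * f2
        # boundary conditions at eta = 0 and the truncated far field
        e0 = torch.zeros(1, 1, requires_grad=True)
        einf = torch.full((1, 1), eta_max, requires_grad=True)
        f0, f0p, _, _ = derivatives(e0)
        _, finfp, _, _ = derivatives(einf)
        loss = (residual.pow(2).mean()
                + f0.pow(2).mean() + f0p.pow(2).mean()
                + (finfp - 1.0).pow(2).mean())
        loss.backward()
        opt.step()

    # f''(0) should approach the known Blasius value of about 0.332
    _, _, f2_0, _ = derivatives(torch.zeros(1, 1, requires_grad=True))
    print("f''(0) ~", f2_0.item())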
Abstractive dialogue summarization has long been viewed as an important standalone task in natural language processing, but no previous work has explored whether abstractive dialogue summarization can also be used to boost an NLP system's performance on other important dialogue comprehension tasks. In this paper, we propose a novel type of dialogue summarization task - STRUctured DiaLoguE Summarization (STRUDEL) - that can help pre-trained language models to better understand dialogues and improve their performance on important dialogue comprehension tasks. We further collect human annotations of STRUDEL summaries over 400 dialogues and introduce a new STRUDEL dialogue comprehension modeling framework that integrates STRUDEL into a graph-neural-network-based dialogue reasoning module over transformer encoder language models to improve their dialogue comprehension abilities. In our empirical experiments on two important downstream dialogue comprehension tasks - dialogue question answering and dialogue response prediction - we show that our STRUDEL dialogue comprehension model significantly improves the dialogue comprehension performance of transformer encoder language models.
A popular approach to creating a zero-shot cross-language retrieval model is to substitute the monolingual pretrained language model in the retrieval model with a multilingual pretrained language model such as Multilingual BERT. This multilingual model is fine-tuned on the retrieval task with monolingual data such as English MS MARCO, using the same training recipe as the monolingual retrieval model. However, such transferred models suffer from mismatches between the languages of the input text seen during training and inference. In this work, we propose transferring monolingual retrieval models using adapters, a parameter-efficient component of a transformer network. By combining adapters pretrained on language tasks for a specific language with task-specific adapters, prior work has shown that adapter-enhanced models perform better than fine-tuning the entire model when transferring across languages in various NLP tasks. By constructing dense retrieval models with adapters, we show that models trained with monolingual data are more effective than fine-tuning the entire model when transferring to a Cross-Language Information Retrieval (CLIR) setting. However, we find that the prior suggestion of replacing the language adapters to match the target language at inference time is suboptimal for dense retrieval models. We provide an in-depth analysis of this discrepancy between other cross-language NLP tasks and CLIR.
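For readers unfamiliar with adapters, the sketch below shows a standard bottleneck adapter of the kind this line of work builds on: a small down-projection, nonlinearity, and up-projection with a residual connection, inserted into a frozen transformer. The dimensions are illustrative, and the paper composes pretrained language adapters with task adapters rather than training such a module from scratch.

    import torch

    class Adapter(torch.nn.Module):
        def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 64):
            super().__init__()
            self.down = torch.nn.Linear(hidden_dim, bottleneck_dim)
            self.up = torch.nn.Linear(bottleneck_dim, hidden_dim)
            self.act = torch.nn.GELU()

        def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
            # project down, apply nonlinearity, project back up, add the
            # residual; only these small matrices are trained while the
            # transformer's own weights stay frozen
            return hidden_states + self.up(self.act(self.down(hidden_states)))

    # Usage: insert after a (frozen) transformer sublayer's output.
    x = torch.randn(2, 128, 768)       # (batch, seq_len, hidden)
    print(Adapter()(x).shape)          # torch.Size([2, 128, 768])

The parameter efficiency is the point: each adapter adds only about 2 * hidden_dim * bottleneck_dim weights, a small fraction of the full model, which is what makes swapping language or task adapters at transfer time cheap.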
Spatial understanding is a fundamental aspect of computer vision and integral to human-level reasoning about images, making it an important component for grounded language understanding. While recent large-scale text-to-image synthesis (T2I) models have shown unprecedented improvements in photorealism, it is unclear whether they have reliable spatial understanding capabilities. We investigate the ability of T2I models to generate correct spatial relationships among objects and present VISOR, an evaluation metric that captures how accurately the spatial relationship described in text is generated in the image. To benchmark existing models, we introduce a large-scale challenge dataset SR2D that contains sentences describing two objects and the spatial relationship between them. We construct and harness an automated evaluation pipeline that employs computer vision to recognize objects and their spatial relationships, and we employ it in a large-scale evaluation of T2I models. Our experiments reveal the surprising finding that, although recent state-of-the-art T2I models exhibit high image quality, they are severely limited in their ability to generate multiple objects or the specified spatial relations such as left/right/above/below. Our analyses demonstrate several biases and artifacts of T2I models such as the difficulty with generating multiple objects, a bias towards generating the first object mentioned, spatially inconsistent outputs for equivalent relationships, and a correlation between object co-occurrence and spatial understanding capabilities. We conduct a human study that shows the alignment between VISOR and human judgment about spatial understanding. We offer the SR2D dataset and the VISOR metric to the community in support of T2I spatial reasoning research.
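The sketch below illustrates the kind of check such an automated pipeline can perform: given detections from an object recognizer, score whether both named objects appear and whether their centroids satisfy the stated relation. The centroid rule and all-or-nothing scoring are assumptions for illustration; VISOR's exact scoring rules may differ.

    # boxes are (x_min, y_min, x_max, y_max); image origin is top-left
    def relation_holds(box_a, box_b, relation):
        ax, ay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
        bx, by = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
        return {
            "left of":  ax < bx,
            "right of": ax > bx,
            "above":    ay < by,   # smaller y is higher in image coordinates
            "below":    ay > by,
        }[relation]

    def visor_style_score(detections, obj_a, obj_b, relation):
        """1.0 if both objects are detected and the relation holds, else 0.0."""
        if obj_a not in detections or obj_b not in detections:
            return 0.0
        return float(relation_holds(detections[obj_a], detections[obj_b], relation))

    # "a dog to the left of a bicycle"
    dets = {"dog": (10, 40, 60, 90), "bicycle": (120, 30, 220, 110)}
    print(visor_style_score(dets, "dog", "bicycle", "left of"))  # 1.0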
We propose EM-PASTE: an Expectation Maximization (EM) guided Cut-Paste compositional dataset augmentation approach for weakly-supervised instance segmentation using only image-level supervision. The proposed method consists of three main components. The first component generates high-quality foreground object masks. To this end, an EM-like approach is proposed that iteratively refines an initial set of object mask proposals generated by a generic region proposal method. Next, in the second component, high-quality context-aware background images are generated using a text-to-image compositional synthesis method like DALL-E. Finally, the third component creates a large-scale pseudo-labeled instance segmentation training dataset by compositing the foreground object masks onto the original and generated background images. The proposed approach achieves state-of-the-art weakly-supervised instance segmentation results on both the PASCAL VOC 2012 and MS COCO datasets by using only image-level, weak label information. In particular, it outperforms the best baseline by +7.4 and +2.8 mAP0.50 on PASCAL and COCO, respectively. Further, the method provides a new solution to the long-tail weakly-supervised instance segmentation problem (when many classes have only a few training samples), by selectively augmenting under-represented classes.
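A minimal sketch of the paste step in the third component is given below: composite a masked foreground object onto a background image and emit the pasted mask as a pseudo instance label. The shapes and offsets are illustrative, and EM-PASTE's full pipeline (the EM mask refinement, DALL-E backgrounds, and any blending) is not reproduced here.

    import numpy as np

    def paste(background, foreground, mask, top, left):
        """background: (H, W, 3); foreground: (h, w, 3); mask: (h, w) bool."""
        out = background.copy()
        h, w = mask.shape
        region = out[top:top + h, left:left + w]
        region[mask] = foreground[mask]          # overwrite masked pixels
        # pseudo instance-segmentation label aligned with the composite
        label = np.zeros(background.shape[:2], dtype=bool)
        label[top:top + h, left:left + w] = mask
        return out, label

    bg = np.zeros((240, 320, 3), dtype=np.uint8)
    fg = np.full((50, 80, 3), 200, dtype=np.uint8)
    m = np.zeros((50, 80), dtype=bool)
    m[10:40, 20:60] = True
    img, lbl = paste(bg, fg, m, top=100, left=120)
    print(img.shape, lbl.sum())  # (240, 320, 3) and the pasted pixel count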
Conditional diffusion probabilistic models can model the distribution of natural images and generate diverse, realistic samples based on given conditions. However, their results can often be unrealistic, with observable color shifts and texture artifacts. We believe this issue results from the divergence between the probabilistic distribution learned by the model and the distribution of natural images; the imposed conditions gradually enlarge this divergence at each sampling timestep. To address this issue, we introduce a new method that brings the predicted samples to the training data manifold using a pretrained unconditional diffusion model. The unconditional model acts as a regularizer and reduces the divergence introduced by the conditional model at each sampling step. We perform comprehensive experiments to demonstrate the effectiveness of our approach on super-resolution, colorization, turbulence removal, and image-deraining tasks. The improvements obtained by our method suggest that such priors can be incorporated as a general plugin for improving conditional diffusion models.
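As a hedged sketch of one way such a regularizer could enter a DDPM-style sampler, the step below mixes the conditional model's noise prediction with a pretrained unconditional model's prediction before forming the reverse-step mean. The linear mixing rule, the weight w, and the model callables eps_cond_model / eps_uncond_model are assumptions for illustration; the paper's exact regularization may differ.

    import torch

    @torch.no_grad()
    def regularized_ddpm_step(x_t, t, cond, eps_cond_model, eps_uncond_model,
                              alphas_cumprod, betas, w=0.3):
        eps_c = eps_cond_model(x_t, t, cond)   # conditional prediction
        eps_u = eps_uncond_model(x_t, t)       # unconditional prior
        eps = (1.0 - w) * eps_c + w * eps_u    # nudge toward the prior

        # standard DDPM posterior mean with the mixed noise estimate
        a_bar = alphas_cumprod[t]
        alpha = 1.0 - betas[t]
        mean = (x_t - betas[t] / (1.0 - a_bar).sqrt() * eps) / alpha.sqrt()
        if t == 0:
            return mean
        return mean + betas[t].sqrt() * torch.randn_like(x_t)

Because only the noise estimate changes, this kind of correction drops into an existing sampler without retraining either model, which is consistent with the abstract's framing of the prior as a general plugin.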